- Search Results
- Page 1 of 1
Search for: All records
Total Resources: 2
We introduce Vysics, a vision-and-physics framework for a robot to build an expressive geometry and dynamics model of a single rigid body, using a seconds-long RGBD video and the robot's proprioception. While the computer vision community has built powerful visual 3D perception algorithms, cluttered environments with heavy occlusions can limit the visibility of objects of interest. However, the observed motion of partially occluded objects can imply that physical interactions took place, such as contact with a robot or the environment. These inferred contacts can supplement the visible geometry with "physible geometry," which best explains the observed object motion through physics. Vysics uses a vision-based tracking and reconstruction method, BundleSDF, to estimate the trajectory and the visible geometry from an RGBD video, and an odometry-based model learning method, the Physics Learning Library (PLL), to infer the "physible" geometry from the trajectory through implicit contact dynamics optimization. The visible and "physible" geometries jointly factor into the optimization of a signed distance function (SDF) that represents the object shape. Vysics requires no pretraining, and no tactile or force sensors. Compared with vision-only methods, Vysics yields object models with higher geometric accuracy and better dynamics prediction in experiments where the object interacts with the robot and the environment under heavy occlusion.

Free, publicly accessible full text available June 21, 2026.
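The abstract's shape representation, a signed distance function (SDF), maps each 3D point to its distance from the object surface: negative inside, zero on the surface, positive outside. A minimal sketch of the idea for a sphere (the geometry, center, and radius here are illustrative, not from the paper):

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=1.0):
    """Signed distance from each 3D point to a sphere's surface.

    Negative values lie inside the object, zero lies on the surface,
    and positive values lie outside -- the sign convention an SDF-based
    shape representation relies on.
    """
    return np.linalg.norm(points - center, axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0],   # center of the sphere (inside)
                [1.0, 0.0, 0.0],   # on the surface
                [2.0, 0.0, 0.0]])  # outside the sphere
print(sphere_sdf(pts))  # [-1.  0.  1.]
```

In Vysics the SDF is not a closed-form primitive like this but an optimized function fit jointly to the visible and "physible" geometry; the sketch only illustrates the sign convention such a representation encodes.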
-
Gilroy, Scott; Lau, Derek; Yang, Lizhi; Izaguirre, Ed; Biermayer, Kristen; Xiao, Anxing; Sun, Mengti; Agrawal, Ayush; Zeng, Jun; Li, Zhongyu; et al. (IEEE International Conference on Automation Science and Engineering (CASE))